We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left( \frac{k n_x}{HN_{\mathrm{shared}}} + \frac{k n_u}{N_{\mathrm{target}}}\right)$, where $n_x > k$ is the state dimension, $n_u$ is the input dimension, $N_{\mathrm{shared}}$ denotes the total amount of data collected for each policy during representation learning, and $N_{\mathrm{target}}$ is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
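To make the two-phase scheme concrete, here is a toy numpy sketch (our own illustration, not the paper's algorithm) in which each task's linear gain factors as K_h = W_h Φ with a shared k-dimensional representation Φ: source-task data is pooled to estimate Φ, and only the small n_u × k head is fit on the target task.

```python
# Minimal sketch (not the paper's algorithm): two-phase imitation learning for
# linear policies u = K x with shared low-rank structure K_h = W_h Phi, where
# Phi in R^{k x n_x} is the representation shared across H source tasks.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, k, H = 10, 3, 2, 20          # state dim, input dim, rep. dim, # source tasks
N_shared, N_target = 200, 50           # samples per source task / target samples

Phi_true = rng.standard_normal((k, n_x))                        # ground-truth shared representation
W_true = [rng.standard_normal((n_u, k)) for _ in range(H + 1)]  # task heads (last one = target)

def rollout(W, N):
    """Generate (state, expert action) pairs for the policy u = W Phi_true x, plus noise."""
    X = rng.standard_normal((N, n_x))
    U = X @ (W @ Phi_true).T + 0.1 * rng.standard_normal((N, n_u))
    return X, U

# Phase (a): estimate Phi by stacking per-source least-squares gains and taking
# their top-k row space (a simple surrogate for joint representation learning).
gains = []
for h in range(H):
    X, U = rollout(W_true[h], N_shared)
    K_hat, *_ = np.linalg.lstsq(X, U, rcond=None)   # K_hat.T approximates W_h Phi
    gains.append(K_hat.T)
_, _, Vt = np.linalg.svd(np.vstack(gains), full_matrices=False)
Phi_hat = Vt[:k]                                    # learned k-dimensional representation

# Phase (b): fine-tune only the n_u x k head on the target task.
X_t, U_t = rollout(W_true[-1], N_target)
Z_t = X_t @ Phi_hat.T                               # k-dimensional features
W_hat, *_ = np.linalg.lstsq(Z_t, U_t, rcond=None)
K_target = W_hat.T @ Phi_hat

err = np.linalg.norm(K_target - W_true[-1] @ Phi_true) / np.linalg.norm(W_true[-1] @ Phi_true)
print(f"relative gain error with shared representation: {err:.3f}")
```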
This tutorial survey provides an overview of recent non-asymptotic advances in statistical learning theory relevant to control and system identification. Although there has been substantial progress across all areas of control, the theory is most developed for the identification of linear systems and for learning the linear quadratic regulator, which are the focus of this manuscript. From a theoretical perspective, much of the labor behind these advances has gone into adapting tools from modern high-dimensional statistics and learning theory. While highly relevant to control theorists interested in integrating tools from machine learning, the foundational material is not always easy to access. To remedy this, we provide a self-contained presentation of the relevant material, outlining all of the key ideas and the technical machinery underlying recent results. We also present a number of open problems and future directions.
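As a flavor of the setting the survey analyzes, the snippet below (our own minimal example, not taken from the manuscript) performs ordinary least-squares identification of a linear system from a single trajectory; non-asymptotic error bounds for exactly this estimator are one of the survey's central topics.

```python
# Minimal sketch of the kind of problem analyzed with non-asymptotic tools:
# least-squares identification of x_{t+1} = A x_t + B u_t + w_t from one trajectory.
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, T = 3, 2, 500
A = np.array([[0.9, 0.2, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])
B = rng.standard_normal((n_x, n_u))

X = np.zeros((T + 1, n_x))
U = rng.standard_normal((T, n_u))        # exploratory white-noise inputs
for t in range(T):
    X[t + 1] = A @ X[t] + B @ U[t] + 0.1 * rng.standard_normal(n_x)

# Regress x_{t+1} on [x_t, u_t]; finite-sample theory bounds ||[A_hat B_hat] - [A B]||.
Z = np.hstack([X[:-1], U])
Theta_hat, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta_hat[:n_x].T, Theta_hat[n_x:].T
print("estimation error:", np.linalg.norm(np.hstack([A_hat - A, B_hat - B]), 2))
```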
We study stochastic policy gradient methods from the perspective of control-theoretic limitations. Our main result is that ill-conditioned linear systems, in the sense of Doyle, inevitably lead to noisy gradient estimates. We also give an example of a class of stable systems in which policy gradient methods suffer from the curse of dimensionality. Our results apply to both state-feedback and partially observed systems.
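For intuition about where gradient noise enters, here is a small illustrative sketch (not the paper's construction) of zeroth-order policy-gradient estimation for a finite-horizon LQR cost; the spread across repeated estimates is the quantity that control-theoretic limits can inflate.

```python
# Illustrative sketch: zeroth-order estimation of the gradient of a finite-horizon
# LQR cost with respect to a state-feedback gain K, from noisy rollouts only.
import numpy as np

rng = np.random.default_rng(2)
n_x, n_u, T = 4, 2, 30
A = 0.95 * np.eye(n_x) + 0.1 * np.diag(np.ones(n_x - 1), 1)   # lightly damped chain
B = np.eye(n_x)[:, :n_u]
Q, R = np.eye(n_x), np.eye(n_u)

def cost(K, n_rollouts=10):
    """Average quadratic cost of u_t = -K x_t over noisy rollouts."""
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal(n_x)
        for _ in range(T):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u + 0.1 * rng.standard_normal(n_x)
    return total / n_rollouts

def zeroth_order_grad(K, radius=0.05, n_samples=20):
    """Two-point random-perturbation gradient estimate of cost(K)."""
    g = np.zeros_like(K)
    for _ in range(n_samples):
        D = rng.standard_normal(K.shape)
        D /= np.linalg.norm(D)
        g += (cost(K + radius * D) - cost(K - radius * D)) / (2 * radius) * D
    return g * (K.size / n_samples)

K0 = 0.1 * rng.standard_normal((n_u, n_x))
grads = [zeroth_order_grad(K0) for _ in range(5)]
print("gradient-estimate spread:", np.std([np.linalg.norm(g) for g in grads]))
```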
We consider the problem of certifying the robustness of deep neural networks against realistic distribution shifts. To this end, we bridge the gap between hand-crafted specifications and realistic deployment settings by proposing a novel neuro-symbolic verification framework in which the distribution shifts are captured by a learned generative model. A unique challenge arising in this setting is that existing verifiers cannot tightly approximate sigmoid activations, which are fundamental to many state-of-the-art generative models. To address this challenge, we propose a general meta-algorithm for handling sigmoid activations that leverages the classical notion of counterexample-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of the sigmoid function to exclude spurious counterexamples found in the previous abstraction, thereby guaranteeing progress in the verification process while keeping the state space small. Experiments on the MNIST and CIFAR-10 datasets show that our framework significantly outperforms existing methods on a range of challenging distribution shifts.
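The sketch below is a toy, self-contained rendition of the lazy counterexample-guided refinement idea on a one-dimensional two-sigmoid network; the network, the interval abstraction, and the thresholds are all hypothetical and far simpler than the paper's verifier.

```python
# Toy counterexample-guided (lazy) refinement for sigmoid over-approximations:
# check y(x) <= c over an input interval, refining the abstraction only where a
# spurious counterexample is found.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical network y(x) = 2*sigmoid(3x+1) - 1.5*sigmoid(x-1) + 0.1
w1, b1 = np.array([3.0, 1.0]), np.array([1.0, -1.0])
w2, b2 = np.array([2.0, -1.5]), 0.1

def y(x):
    return w2 @ sigmoid(w1 * x + b1) + b2

def abstract_upper_bound(lo, hi):
    """Interval over-approximation: each sigmoid is bounded independently
    (sigmoid is monotone), ignoring that both units share the same input x."""
    pre_lo = np.minimum(w1 * lo, w1 * hi) + b1
    pre_hi = np.maximum(w1 * lo, w1 * hi) + b1
    s_lo, s_hi = sigmoid(pre_lo), sigmoid(pre_hi)
    return float(np.sum(np.maximum(w2 * s_lo, w2 * s_hi)) + b2)

def verify(lo, hi, c, max_iters=100):
    intervals = [(lo, hi)]                       # the current (lazy) abstraction
    for _ in range(max_iters):
        worst = max(intervals, key=lambda iv: abstract_upper_bound(*iv))
        if abstract_upper_bound(*worst) <= c:
            return "verified", None
        mid = 0.5 * (worst[0] + worst[1])        # candidate counterexample
        if y(mid) > c:
            return "falsified", mid              # genuine counterexample
        # Spurious: refine only the offending piece of the abstraction.
        intervals.remove(worst)
        intervals += [(worst[0], mid), (mid, worst[1])]
    return "unknown", None

print(verify(-2.0, 2.0, c=2.0))   # verified after refining away a spurious counterexample
print(verify(-2.0, 2.0, c=1.3))   # falsified: a genuine counterexample is found near x = 1
```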
We introduce variational state-space filters (VSSF), a new method for unsupervised learning, identification, and filtering of latent Markov state-space models from raw pixels. We present a theoretically sound framework for latent state-space inference under heterogeneous sensor configurations. The resulting model can integrate an arbitrary subset of the sensor measurements used during training, enabling semi-supervised learning of state representations that enforces agreement between certain components of the learned latent state space and interpretable measurements. From this framework we derive L-VSSF, an explicit instantiation of the model parameterized with linear latent dynamics and Gaussian distributions. We experimentally demonstrate L-VSSF's ability to filter the latent state space beyond the sequence length of the training dataset in several different test environments.
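As a point of reference for the linear-Gaussian case, the following sketch (hypothetical, not the paper's implementation) shows the Kalman-filter backbone such a model could use, with the measurement update restricted to whichever subset of sensors is available at each step.

```python
# Minimal linear-Gaussian filtering backbone for an L-VSSF-style latent model:
# a Kalman filter whose update uses only the observed subset of sensor rows.
import numpy as np

def kalman_step(mu, Sigma, y, observed, A, C, Q, R):
    """One predict/update step; `observed` is a boolean mask over sensor rows."""
    # Predict with linear latent dynamics z_{t+1} = A z_t + w_t, w_t ~ N(0, Q).
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    if not observed.any():                       # no sensors available: predict only
        return mu_pred, Sigma_pred
    C_o, R_o, y_o = C[observed], R[np.ix_(observed, observed)], y[observed]
    S = C_o @ Sigma_pred @ C_o.T + R_o
    K = Sigma_pred @ C_o.T @ np.linalg.solve(S, np.eye(S.shape[0]))
    mu_new = mu_pred + K @ (y_o - C_o @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ C_o) @ Sigma_pred
    return mu_new, Sigma_new

# Toy usage: 2-D latent state, 3 sensors, the second sensor missing at odd steps.
rng = np.random.default_rng(3)
A = np.array([[0.99, 0.1], [0.0, 0.95]])
C = rng.standard_normal((3, 2))
Q, R = 0.01 * np.eye(2), 0.05 * np.eye(3)
mu, Sigma = np.zeros(2), np.eye(2)
z = rng.standard_normal(2)
for t in range(20):
    z = A @ z + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ z + rng.multivariate_normal(np.zeros(3), R)
    mask = np.array([True, t % 2 == 0, True])
    mu, Sigma = kalman_step(mu, Sigma, y, mask, A, C, Q, R)
print("final filtered estimate:", mu, " true latent:", z)
```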
Motivated by bridging the simulation-to-reality gap in the context of safety-critical systems, we consider learning adversarially robust stability certificates for unknown nonlinear dynamical systems. In line with approaches from robust control, we consider additive and Lipschitz-bounded adversaries perturbing the system dynamics. We show that under suitable assumptions of incremental stability on the underlying system, the statistical cost of learning an adversarially robust stability certificate is, up to constant factors, equivalent to that of learning a nominal stability certificate. Our results hinge on novel bounds for the Rademacher complexity of the resulting class of adversarial losses, which may be of independent interest. To the best of our knowledge, this is the first characterization of sample-complexity bounds for adversarial learning over data generated by a dynamical system. We also provide a practical algorithm for approximating the adversarial training procedure, and validate our findings on a damped pendulum example.
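The following rough sketch illustrates an approximate adversarial-training loop on a damped pendulum with a quadratic certificate; the dynamics, the constants, and the first-order inner maximization are our own simplifications rather than the paper's algorithm.

```python
# Rough sketch of approximate adversarial training of a quadratic decrease
# certificate V(x) = x' P x for a damped pendulum, with an additive norm-bounded
# adversary on the dynamics (inner max is approximated to first order).
import numpy as np

rng = np.random.default_rng(4)
dt, eps, rho, lr = 0.1, 0.01, 0.95, 1e-2

def f(x):
    """Discretized, strongly damped pendulum (theta, theta_dot)."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-np.sin(theta) - 1.5 * omega)])

def worst_case_step(x, P):
    """First-order inner maximization: push f(x) along the gradient of V."""
    fx = f(x)
    g = P @ fx
    return fx + eps * g / (np.linalg.norm(g) + 1e-12)

# Training states sampled away from the origin (an additive adversary makes a
# strict decrease impossible arbitrarily close to the equilibrium).
r = rng.uniform(0.3, 1.0, 300)
ang = rng.uniform(0.0, 2.0 * np.pi, 300)
X = np.stack([r * np.cos(ang), r * np.sin(ang)], axis=1)

P = np.eye(2)
for _ in range(300):
    grad = np.zeros((2, 2))
    for x in X:
        xp = worst_case_step(x, P)
        violation = xp @ P @ xp - rho * (x @ P @ x)
        if violation > 0:                        # hinge on the robust decrease condition
            grad += np.outer(xp, xp) - rho * np.outer(x, x)
    # Gradient step, then projection back onto well-conditioned PSD matrices.
    P -= lr * grad / len(X)
    w, V = np.linalg.eigh((P + P.T) / 2)
    P = V @ np.diag(np.clip(w, 0.1, 10.0)) @ V.T

violations = 0
for x in X:
    xp = worst_case_step(x, P)
    violations += xp @ P @ xp > rho * (x @ P @ x)
print(f"states violating the robust decrease condition after training: {violations}/{len(X)}")
```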
Inspired by the recent empirical success of implicit learning across many robotics tasks, we seek to understand the theoretical benefits of implicit formulations in the face of nearly discontinuous functions, a common characteristic of systems that make and break contact with the environment, such as in locomotion and manipulation. We present and motivate three formulations for learning a function: one explicit and two implicit. We derive generalization bounds for each of the three approaches, revealing that explicit and implicit methods based on prediction-error losses alike generally fail to produce tight bounds, in contrast to another implicit method with a violation-based loss definition, which can be fundamentally more robust to steep slopes. Furthermore, we show that this violation-based implicit loss can tightly bound the graph distance, a quantity that often has physical roots and handles noise in both inputs and outputs, unlike prediction losses, which account only for output noise. Our insights into the generalizability and physical relevance of violation-based implicit formulations match evidence from prior work and are validated on a toy problem inspired by rigid-contact models and referenced throughout our theoretical analysis.
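The toy computation below illustrates the core contrast: near a steep, nearly discontinuous function, a prediction-error loss amplifies input noise by the slope, while a violation-based implicit loss stays on the order of the noise itself. The function and its implicit form are illustrative choices of ours, not the paper's contact model.

```python
# Numeric illustration of prediction-error vs. violation-based implicit losses
# near a steep, nearly discontinuous function.
import numpy as np

eps = 0.01                                 # transition width: f is nearly a step function
f = lambda x: np.tanh(x / eps)             # explicit map y = f(x)
g = lambda x, y: x - eps * np.arctanh(y)   # implicit form: g(x, y) = 0 exactly on the graph

rng = np.random.default_rng(5)
x = rng.uniform(-0.02, 0.02, 1000)         # query points clustered near the transition
y = f(x)                                   # noiseless labels on the graph of f
x_noisy = x + 1e-3 * rng.standard_normal(x.shape)   # small input noise

# Explicit, prediction-error loss: input noise is amplified by the ~1/eps slope.
pred_loss = np.mean(np.abs(f(x_noisy) - y))
# Violation-based implicit loss: |g(x_noisy, y)| = |x_noisy - x| measures distance
# to the graph directly, so it stays on the order of the input noise.
viol_loss = np.mean(np.abs(g(x_noisy, y)))

print(f"mean prediction error: {pred_loss:.4f}")   # roughly noise / eps near the transition
print(f"mean violation loss:   {viol_loss:.4f}")   # roughly the noise level itself
```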
This paper is concerned with learning safe control laws from expert demonstrations. We assume that suitable models of the system dynamics and the output measurement map are available, together with corresponding error bounds. We first propose robust output control barrier functions (ROCBFs) as a means of guaranteeing safety, defined through controlled forward invariance of a safe set. We then present an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior, e.g., data collected from a human operator. Along with the optimization problem, we provide verifiable conditions that guarantee the validity of the obtained ROCBF. These conditions are stated in terms of the density of the data and of Lipschitz and boundedness constants of the learned function and of the models of the system dynamics and the output measurement map. When the parameterization of the ROCBF is linear, the optimization problem is, under mild assumptions, convex. We validate our findings in the autonomous driving simulator CARLA and demonstrate how to learn safe control laws from RGB camera images.
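With a linear parameterization h(x) = θᵀφ(x), the learning problem becomes a convex program. The cvxpy sketch below sets up such a program for a toy single-integrator example with quadratic features; the constraints and constants are simplified stand-ins of ours and omit the paper's model-error terms.

```python
# Hedged sketch of the convex program obtained with a linearly parameterized
# barrier h(x) = theta . phi(x); toy single-integrator dynamics and constants.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)

def phi(x):            # quadratic features
    return np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0] * x[1]])

def dphi(x):           # Jacobian of phi, shape (6, 2)
    return np.array([[0, 0], [1, 0], [0, 1], [2 * x[0], 0], [0, 2 * x[1]], [x[1], x[0]]])

# "Expert demonstrations": safe states inside radius 0.8 under u = -x for the
# single integrator x_dot = u, plus sampled states outside the safe set.
safe = rng.uniform(-0.8, 0.8, (200, 2))
safe = safe[np.linalg.norm(safe, axis=1) < 0.8]
unsafe = rng.uniform(-1.8, 1.8, (200, 2))
unsafe = unsafe[np.linalg.norm(unsafe, axis=1) > 1.1]

theta = cp.Variable(6)
alpha, margin = 1.0, 0.05
cons = []
for x in safe:
    u = -x                                   # demonstrated (safe) action
    h = phi(x) @ theta
    hdot = (u @ dphi(x).T) @ theta           # d/dt h = grad_x h . x_dot, with x_dot = u
    cons += [h >= margin, hdot + alpha * h >= 0]
for x in unsafe:
    cons += [phi(x) @ theta <= -margin]

prob = cp.Problem(cp.Minimize(cp.norm(theta, 2)), cons)
prob.solve()
print("status:", prob.status, " learned theta:", np.round(theta.value, 3))
```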
The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results, despite the considerable training instability of some style bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). E.g., we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
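The snippet below shows, on hypothetical numbers, the kind of seed-level significance test such a protocol calls for; BiasBed's actual procedure may differ (see the repository).

```python
# Illustrative seed-level significance test for comparing two training methods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical out-of-distribution accuracies over 10 training seeds per method;
# the large spread mimics the training instability mentioned above.
baseline = rng.normal(0.62, 0.05, 10)
debiased = rng.normal(0.65, 0.08, 10)

# Welch's t-test does not assume equal variances across methods.
t_stat, p_value = stats.ttest_ind(debiased, baseline, equal_var=False)
print(f"mean gain: {debiased.mean() - baseline.mean():+.3f}, p-value: {p_value:.3f}")
# Only claim an improvement if the p-value falls below the chosen significance
# level; with spreads like these, a 3-point mean gain is often not significant.
```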
Pre-trained protein language models have demonstrated significant applicability in different protein engineering tasks. A common way to use the latent representations of these pre-trained transformer models is to mean-pool across residue positions, reducing the feature dimension for downstream tasks such as predicting biophysical properties or other functional behaviours. In this paper we provide a two-fold contribution to machine learning (ML)-driven drug design. First, we demonstrate the power of sparsity by applying sparsity-promoting penalization to pre-trained transformer models, obtaining more robust and accurate melting temperature (Tm) predictions for single-chain variable fragments with a mean absolute error of 0.23 °C. Second, we demonstrate the benefit of framing our prediction problem in a probabilistic framework. Specifically, we advocate for the adoption of probabilistic frameworks, especially in the context of ML-driven drug design.
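The sketch below illustrates both ingredients on synthetic stand-ins for pre-trained per-residue embeddings (no actual protein language model is called): mean-pooling over residues, an L1-penalized head for sparsity, and a Bayesian linear head that returns predictive uncertainty.

```python
# Sketch of the two ingredients on synthetic stand-in embeddings: mean-pooling,
# a sparsity-promoting (L1) regression head, and a probabilistic head.
import numpy as np
from sklearn.linear_model import Lasso, BayesianRidge

rng = np.random.default_rng(8)
n_seqs, n_residues, d = 200, 120, 64
per_residue = rng.standard_normal((n_seqs, n_residues, d))   # stand-in embeddings
X = per_residue.mean(axis=1)                                  # mean-pool over residue positions
true_w = np.zeros(d)
true_w[:5] = rng.standard_normal(5)                           # only a few informative dimensions
tm = X @ true_w + 0.2 * rng.standard_normal(n_seqs)           # synthetic "melting temperature"

X_train, X_test, y_train, y_test = X[:150], X[150:], tm[:150], tm[150:]

sparse_head = Lasso(alpha=0.05).fit(X_train, y_train)
mae = np.mean(np.abs(sparse_head.predict(X_test) - y_test))
print(f"L1 head: MAE={mae:.3f}, nonzero coefficients={np.sum(sparse_head.coef_ != 0)}")

prob_head = BayesianRidge().fit(X_train, y_train)
mean, std = prob_head.predict(X_test, return_std=True)
print(f"probabilistic head: MAE={np.mean(np.abs(mean - y_test)):.3f}, "
      f"mean predictive std={std.mean():.3f}")
```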